Simple algorithms run to completion in a short time on a single computer; if the computer has a fault, you simply start again. However, big data analyses often run for long periods on large numbers of cloud computers, and the failure of at least one of those computers becomes likely, if not inevitable. It is therefore critical that the algorithms used are robust to failure, ensuring that if one processor fails it is possible to detect this and redo its work without needing to modify the behaviour of more than a small number of other processors. A key aspect of MapReduce and Apache Hadoop is this ability to 'pick themselves up' after processor failures.
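A minimal sketch of this idea in Python: a toy scheduler detects a failed task and re-executes only that task on a different worker, leaving completed tasks untouched. All names here (run_task, schedule, WorkerFailure) and the failure model are invented for illustration; this is not the actual MapReduce or Hadoop API.

```python
import random

class WorkerFailure(Exception):
    """Raised when a simulated worker dies mid-task."""

def run_task(worker_id, task):
    # Simulate an unreliable worker: fail roughly 20% of the time.
    if random.random() < 0.2:
        raise WorkerFailure(f"worker {worker_id} failed on task {task}")
    return f"result of {task}"

def schedule(tasks, num_workers=4, max_retries=3):
    """Run each task, re-executing it on another worker after a failure.

    Only the failed task is redone; the other tasks are unaffected,
    mirroring the failure-recovery property described above.
    """
    results = {}
    for task in tasks:
        for attempt in range(max_retries):
            # Pick a different worker on each retry.
            worker_id = (hash(task) + attempt) % num_workers
            try:
                results[task] = run_task(worker_id, task)
                break  # task succeeded; move on to the next one
            except WorkerFailure:
                continue  # failure detected; redo the work elsewhere
        else:
            raise RuntimeError(f"task {task} failed {max_retries} times")
    return results

if __name__ == "__main__":
    print(schedule([f"map-{i}" for i in range(8)]))
```

In a real cluster the detection step is a missed heartbeat rather than an exception, but the recovery logic is the same: reassign the failed unit of work, leave everything else running.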
Used on page 168